Search for: All records

Creators/Authors contains: "Passonneau, Rebecca J"

Note: Clicking a Digital Object Identifier (DOI) takes you to an external site maintained by the publisher. Some full-text articles may not be available free of charge during the publisher's embargo period.

Some links on this page may take you to non-federal websites, whose policies may differ from those of this site.

  1. Free, publicly-accessible full text available July 27, 2026
  2. Free, publicly-accessible full text available June 10, 2026
  3. Free, publicly-accessible full text available June 10, 2026
  4. Writing scientific explanations is a core practice in science, but students find it difficult to write coherent scientific explanations, and teachers find it challenging to provide real-time feedback on students’ essays. In this study, we discuss how PyrEval, an NLP technology, was used to automatically assess students’ essays and provide feedback. We found that students explained more key ideas in their essays after the automated assessment and feedback. However, there were issues both with the automated assessments and with students’ understanding of the feedback and their subsequent revisions.
  5. To assess student knowledge, educators face a tradeoff between open-ended and fixed-response questions. Open-ended questions are easier to formulate and provide greater insight into student learning, but are burdensome to assess. Machine learning methods that could reduce the assessment burden also have a cost, given that large datasets of reliably assessed examples (labeled data) are required for training and testing. We address the human costs of assessment and data labeling using selective prediction, where the output of a machine-learned model is used when the model makes a confident decision, but otherwise the model defers to a human decision-maker. The goal is to defer less often while maintaining human assessment quality on the total output. We refer to the deferral criteria as a deferral policy, and we show it is possible to learn when to defer. We first trained an autograder on a combination of historical data and a small amount of newly labeled data, achieving moderate performance. We then used the autograder output as input to a logistic regression to learn when to defer; the learned logistic regression equation constitutes the deferral policy (a sketch of this idea follows the list below). Tests of the selective prediction method on a held-out test set showed that human-level assessment quality can be achieved with a major reduction of human effort.
  6. In principle, educators can use writing to scaffold students’ understanding of increasingly complex science ideas. In practice, formative assessment of students’ science writing is very labor intensive. We present PyrEval+CR, an automated tool for formative assessment of middle school students’ science essays. It identifies each idea in a student’s science essay and the importance of that idea in the curriculum.
  7. This is a contribution to a symposium. The symposium will provide opportunities for discussion about how Artificial Intelligence can support ambitious learning practices in CSCL. To the extent that CSCL can be a lever for equitable educational change, AI needs to be able to support the kinds of practices that afford agency to students and teachers. However, AI also brings to the fore the need to consider equity and ethics. This interactive session will provide opportunities to discuss these issues in the context of the examples presented here. Our contribution focuses on two participatory design studies we conducted with 14 teachers to understand the kinds of automatic feedback they thought would support their students’ science explanation writing, as well as how they would like summaries of information from students’ writing to be presented in a teacher’s dashboard. We also discuss how we developed our system, PyrEval, for automated writing support based on historical data and scoring from manual coding rubrics.
  8. Science writing skills depend on a student’s ability to coordinate conceptual understanding of science with the ability to articulate ideas independently and to distinguish between gradations of importance in ideas. Real-time scaffolding of student writing, during and immediately after the writing process, could ease the cognitive burden of learning to coordinate these skills and enhance student learning of science. This paper presents a design process for automated support of real-time scaffolding of middle school students’ science explanations. We describe our adaptation of an existing tool for automatic content assessment to align more closely with a rubric, and our reliance on data mining of historical examples of middle school science writing. On a reserved test set of semi-synthetic examples of science explanations, the modified tool demonstrated high correlation with the manual rubric. We conclude that the tool can support a wide range of design options for customized, real-time student feedback.
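
The deferral policy described in item 5 lends itself to a compact illustration. Below is a minimal sketch in Python, assuming scikit-learn is available; the confidence features (top class probability and the margin between the top two classes), the function names, and the acceptance threshold are illustrative assumptions, not the authors' actual pipeline.

    # Minimal sketch of learning a deferral policy for selective prediction.
    # Assumptions: an autograder that emits class probabilities, and a small
    # set of human-labeled examples on which to train the policy.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def confidence_features(probs):
        # Features derived from the autograder's output: the top class
        # probability and the margin between the top two classes.
        sorted_p = np.sort(probs, axis=1)
        return np.column_stack([sorted_p[:, -1], sorted_p[:, -1] - sorted_p[:, -2]])

    def fit_deferral_policy(probs, preds, human_labels):
        # The target is 1 when the autograder agreed with the human assessor,
        # so the policy learns to predict when the autograder is correct.
        correct = (preds == human_labels).astype(int)
        return LogisticRegression().fit(confidence_features(probs), correct)

    def assess(policy, probs, preds, accept_threshold=0.9):
        # Accept the autograder's label when the policy is confident the
        # label is correct; otherwise defer to a human (signaled by None).
        p_correct = policy.predict_proba(confidence_features(probs))[:, 1]
        return [pred if p >= accept_threshold else None
                for pred, p in zip(preds, p_correct)]

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # Synthetic stand-in for autograder outputs on human-labeled data.
        probs = rng.dirichlet(np.ones(3), size=200)
        preds = probs.argmax(axis=1)
        gold = np.where(rng.random(200) < 0.8, preds, rng.integers(0, 3, size=200))
        policy = fit_deferral_policy(probs, preds, gold)
        decisions = assess(policy, probs, preds)
        print("deferred:", sum(d is None for d in decisions), "of", len(decisions))

Raising accept_threshold defers more essays to humans and protects assessment quality; lowering it saves more human effort. That is the coverage-versus-quality tradeoff the abstract describes.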